On a parameter estimation method for Gibbs-Markov random fields
Authors
Abstract
Similar resources
On a Parameter Estimation Method for Gibbs-Markov Random Fields
Fig. 2. Bayes error estimates for SONAR data transformed to 10-dimensional space by Patrick-Fisher's algorithm (solid line) and E (dotted line). (For high-dimensional data these results might be more in favor of E.) This is a result of the fact that each iteration of simplex requires that the samples be transformed to the low-dimensional space, and the Bayes error then estimated in that space, which...
Parameter Estimation for Inhomogeneous Markov Random Fields Using Pseudo-Likelihood
We describe an algorithm for locally-adaptive parameter estimation of spatially inhomogeneous Markov random fields (MRFs). In particular, we establish that there is a unique solution which maximizes the local pseudo-likelihood in the inhomogeneous MRF model. Subsequently we demonstrate how Besag's iterative conditional mode (ICM) procedure can be generalized from homogeneous MRFs to inhomogeneous...
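The pseudo-likelihood idea in the snippet above can be made concrete with a minimal sketch (this is not the paper's locally-adaptive algorithm): for a binary Ising-type MRF with spins in {-1, +1} and a single homogeneous coupling `beta` (an illustrative assumption), each site's conditional probability depends only on its 4-connected neighbour sum, so the pseudo-likelihood factorizes over sites and can be maximized directly.

```python
import numpy as np

def neighbor_sum(x):
    # Sum of the 4-connected neighbours at each site (zero padding at borders).
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    return s

def neg_log_pseudolikelihood(beta, x):
    # For an Ising MRF, P(x_s | neighbours) = sigmoid(2 * beta * x_s * S_s),
    # where S_s is the neighbour sum; accumulate -log sigmoid stably.
    z = 2.0 * beta * x * neighbor_sum(x)
    return np.sum(np.logaddexp(0.0, -z))

# Estimate beta by a 1-D grid search (a crude stand-in for a real optimizer).
rng = np.random.default_rng(0)
x = np.where(rng.random((32, 32)) < 0.5, 1, -1)   # i.i.d. noise, so true beta = 0
betas = np.linspace(-1.0, 1.0, 201)
beta_hat = betas[np.argmin([neg_log_pseudolikelihood(b, x) for b in betas])]
print(float(beta_hat))
```

Because the sample is independent noise, the estimate lands near zero; on a genuinely coupled sample the minimizer would move toward the true coupling.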
Gibbs Random Fields: Temperature and Parameter Analysis
Gibbs random field (GRF) models work well for synthesizing complex natural-looking image data with a small number of parameters; however, estimating these parameters remains problematic. This paper addresses the analysis problem in a new way by examining the role of the temperature parameter of the Gibbs distribution. Studies of the model energy with respect to the temperature are ...
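The role of the temperature parameter can be illustrated with a toy Gibbs distribution P(x) ∝ exp(−E(x)/T): lowering T concentrates the mass on low-energy states, while raising it flattens the distribution toward uniform. The three-state energy values below are arbitrary illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gibbs(energies, T):
    # P(x) proportional to exp(-E(x)/T); normalize via a stable softmax.
    logits = -np.asarray(energies, dtype=float) / T
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

E = np.array([0.0, 1.0, 2.0])        # state 0 has the lowest energy
for T in (0.1, 1.0, 10.0):
    print(T, np.round(gibbs(E, T), 3))
```

At T = 0.1 nearly all probability sits on the lowest-energy state; at T = 10 the three states are almost equiprobable, which is exactly the flattening effect the temperature analysis studies.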
Markov Random Fields and Gibbs Measures
A Markov random field is a name given to a natural generalization of the well known concept of a Markov chain. It arises by looking at the chain itself as a very simple graph, and ignoring the directionality implied by “time”. A Markov chain can then be seen as a chain graph of stochastic variables, where each variable has the property that it is independent of all the others (the future and p...
Revisiting Boltzmann learning: parameter estimation in Markov random fields
This contribution concerns a generalization of the Boltzmann Machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization and generalization in the context of Boltzmann Machines. We provide an illustrative example c...
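The Boltzmann learning rule this entry generalizes updates each weight by the difference between data and model correlations, ΔW_ij ∝ ⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model. A minimal sketch for a tiny fully-visible Boltzmann machine follows; the network size, learning rate, toy patterns, and the exact enumeration of the model expectation (feasible only for a handful of units) are all illustrative assumptions.

```python
import itertools
import numpy as np

def model_correlations(W, b):
    # Exact <s_i s_j> and <s_i> under P(s) proportional to exp(s.W.s/2 + b.s),
    # by enumerating all 2^n spin configurations.
    states = np.array(list(itertools.product([-1, 1], repeat=len(b))), float)
    logits = 0.5 * np.einsum('ki,ij,kj->k', states, W, states) + states @ b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.einsum('k,ki,kj->ij', p, states, states), p @ states

# Toy data: two patterns in which units 0 and 1 always agree and oppose unit 2.
data = np.array([[1, 1, -1], [-1, -1, 1]], float)
data_ss = data.T @ data / len(data)

n, lr = 3, 0.1
W, b = np.zeros((n, n)), np.zeros(n)
for _ in range(200):
    model_ss, model_s = model_correlations(W, b)
    W += lr * (data_ss - model_ss)       # Boltzmann rule: data minus model
    np.fill_diagonal(W, 0.0)             # no self-connections
    b += lr * (data.mean(axis=0) - model_s)

print(np.round(W, 2))
```

After training, W[0,1] is strongly positive and W[0,2], W[1,2] negative, mirroring the correlations in the data; for a fully-visible machine this gradient ascent maximizes the likelihood directly, and hidden units or regularization (as the paper discusses) change only how the expectations are computed.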
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 1994
ISSN: 0162-8828
DOI: 10.1109/34.277597